There is increasing adoption of artificial intelligence in drug discovery. However, existing work applies machine learning mainly to the chemical structures of molecules while ignoring the vast textual knowledge available in chemistry. Incorporating textual knowledge enables us to realize new drug design objectives, adapt to text-based instructions, and predict complex biological activities. We present a multi-modal molecule structure-text model, MoleculeSTM, which jointly learns molecules' chemical structures and textual descriptions via a contrastive learning strategy. To train MoleculeSTM, we construct the largest multi-modal dataset to date, namely PubChemSTM, with over 280K chemical structure-text pairs. To demonstrate the effectiveness and utility of MoleculeSTM, we design two challenging zero-shot tasks based on text instructions: structure-text retrieval and molecule editing. MoleculeSTM possesses two main properties: open vocabulary and compositionality via natural language. In experiments, MoleculeSTM achieves state-of-the-art generalization to novel biochemical concepts across various benchmarks.
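As a rough illustration of the contrastive strategy (not the released MoleculeSTM implementation), the following minimal PyTorch sketch aligns paired molecule and text embeddings with a symmetric InfoNCE-style loss; the encoder outputs are random stand-ins and all names are hypothetical.

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(mol_emb, text_emb, temperature=0.1):
    """Symmetric InfoNCE-style loss over a batch of paired
    molecule/text embeddings of shape [batch, dim]."""
    mol_emb = F.normalize(mol_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = mol_emb @ text_emb.t() / temperature           # pairwise similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    # Matched structure-text pairs sit on the diagonal; all other pairs are negatives.
    loss_m2t = F.cross_entropy(logits, targets)
    loss_t2m = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_m2t + loss_t2m)

# Usage with random stand-ins for the two encoders' outputs:
mol_emb = torch.randn(8, 256)
text_emb = torch.randn(8, 256)
print(contrastive_alignment_loss(mol_emb, text_emb))
```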
Cooperative multi-agent reinforcement learning (MARL) with a permutation-invariant agent framework has achieved great empirical success in real-world applications. Unfortunately, the theoretical understanding of this MARL problem is lacking, due to the curse of many agents and the limited exploration of relational reasoning in existing works. In this paper, we verify that the transformer implements complex relational reasoning, and we propose and analyze model-free and model-based offline MARL algorithms with transformer approximators. We prove that the suboptimality gaps of the model-free and model-based algorithms are independent of and logarithmic in the number of agents, respectively, which mitigates the curse of many agents. These results follow from a new generalization error bound for the transformer and a new analysis of maximum likelihood estimation (MLE) of the system dynamics with the transformer. Our model-based algorithm is the first provably efficient MARL algorithm that explicitly exploits the permutation invariance of the agents.
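For intuition on why transformers suit permutation-invariant agents, the toy sketch below (purely illustrative, not the algorithm analyzed in the paper) shows that self-attention without positional encodings, followed by mean pooling over the agent axis, yields a value that is invariant to agent ordering.

```python
import torch
import torch.nn as nn

class PermutationInvariantCritic(nn.Module):
    """Toy critic: self-attention over agents (no positional encoding),
    then mean pooling, so the value is invariant to agent ordering."""
    def __init__(self, obs_dim, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(obs_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.value_head = nn.Linear(d_model, 1)

    def forward(self, obs):                      # obs: [batch, n_agents, obs_dim]
        h = self.encoder(self.embed(obs))        # permutation-equivariant over agents
        return self.value_head(h.mean(dim=1))    # mean pooling -> permutation-invariant

critic = PermutationInvariantCritic(obs_dim=10)
critic.eval()                                    # disable dropout for the check below
obs = torch.randn(2, 5, 10)
perm = torch.randperm(5)
print(torch.allclose(critic(obs), critic(obs[:, perm]), atol=1e-5))  # True
```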
Strengthening the robustness of machine learning-based Android malware detectors in the real world requires incorporating realizable adversarial examples (RealAEs), i.e., AEs that satisfy the domain constraints of Android malware. However, existing work focuses on generating RealAEs in the problem space, which is known to be time-consuming and impractical for adversarial training. In this paper, we propose to generate RealAEs in the feature space, leading to a simpler and more efficient solution. Our approach is driven by a novel interpretation of Android malware properties in the feature space. More concretely, we extract feature-space domain constraints by learning meaningful feature dependencies from data and apply them by constructing a robust feature space. Our experiments on DREBIN, a well-known Android malware detector, demonstrate that our approach outperforms the state-of-the-art defense, Sec-SVM, against realistic gradient- and query-based attacks. Additionally, we demonstrate that generating feature-space RealAEs is faster than generating problem-space RealAEs, indicating its high applicability in adversarial training. We further validate the ability of our learned feature-space domain constraints to represent Android malware properties by showing that (i) re-training detectors with our feature-space RealAEs largely improves model performance on similar problem-space RealAEs and (ii) using our feature-space domain constraints can help distinguish RealAEs from unrealizable AEs (unRealAEs).
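A minimal, hypothetical sketch of the general recipe of perturbing in feature space while projecting onto learned domain constraints (here simplified to per-feature valid ranges); the function name, the PGD-style inner loop, and the constraint form are illustrative assumptions, not the paper's exact construction.

```python
import torch

def feature_space_realae(model, x, y, feat_low, feat_high,
                         eps=0.5, steps=10, step_size=0.1):
    """PGD-style perturbation in feature space, projected onto learned
    per-feature bounds so the result stays within the domain constraints."""
    loss_fn = torch.nn.CrossEntropyLoss()
    x_adv = x.clone()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + step_size * grad.sign()              # untargeted loss ascent
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)  # perturbation budget
        x_adv = torch.max(torch.min(x_adv, feat_high), feat_low)  # domain constraints
    return x_adv.detach()

# Toy usage with a stand-in linear detector over 100 features:
model = torch.nn.Linear(100, 2)
x, y = torch.rand(4, 100), torch.randint(0, 2, (4,))
adv = feature_space_realae(model, x, y, feat_low=torch.zeros(100), feat_high=torch.ones(100))
```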
Deep reinforcement learning (DRL) has revolutionized learning and actuation in applications such as game playing and robotic control. The cost of data collection, i.e., generating transitions from agent-environment interactions, remains a major challenge for wider DRL adoption in complex real-world problems. A cloud-native paradigm that trains DRL agents on GPU cloud platforms is a promising solution. In this paper, we present ElegantRL-podracer, a scalable and elastic library for cloud-native deep reinforcement learning that efficiently supports millions of GPU cores to carry out massively parallel training at multiple levels. At a high level, ElegantRL-podracer employs a tournament-based ensemble scheme to orchestrate the training process on hundreds or even thousands of GPUs, scheduling the interactions between a leaderboard and a training pool with hundreds of pods. At a low level, each pod simulates agent-environment interactions in parallel by fully utilizing nearly 7,000 GPU CUDA cores in a single GPU. Our ElegantRL-podracer library features high scalability, elasticity, and accessibility by following the development principles of containerization, microservices, and MLOps. Using an NVIDIA DGX SuperPOD cloud, we conduct extensive experiments on various tasks in robotic control and stock trading and show that ElegantRL-podracer substantially outperforms RLlib. Our code is available on GitHub.
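A toy single-process sketch of the tournament-based ensemble idea (a leaderboard of agent checkpoints feeding a training pool); the class and function names are invented for illustration and do not reflect the ElegantRL-podracer API.

```python
import random

class Leaderboard:
    """Keeps the top-k agent checkpoints ranked by evaluation score."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = []                          # list of (score, params)

    def submit(self, score, params):
        self.entries.append((score, params))
        self.entries.sort(key=lambda e: e[0], reverse=True)
        self.entries = self.entries[:self.capacity]

    def sample(self):
        # A new training pod restarts from a randomly chosen top checkpoint.
        return random.choice(self.entries)[1]

def tournament_training(init_params, train_step, evaluate, rounds=20):
    board = Leaderboard()
    board.submit(evaluate(init_params), init_params)
    for _ in range(rounds):
        params = train_step(board.sample())        # one pod trains a candidate
        board.submit(evaluate(params), params)     # winners stay on the leaderboard
    return board.entries[0]

# Toy usage: "training" nudges a scalar toward 1.0; evaluation rewards closeness.
best = tournament_training(0.0,
                           train_step=lambda p: p + random.uniform(-0.1, 0.3),
                           evaluate=lambda p: -abs(1.0 - p))
print(best)
```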
Recent work has shown that imperceptible perturbations can be applied to craft unlearnable examples (ULEs), i.e., images whose content cannot be used to improve a classifier during training. In this paper, we chart the road that researchers should follow in studying ULEs as they were originally formulated (ULEOs). The paper makes four contributions. First, we show that ULEOs exploit color and, consequently, that their effect can be mitigated by simple grayscale pre-filtering, without resorting to adversarial training. Second, we propose an extension of ULEOs, called ULEO-GrayAugs, which forces the generated ULEs away from channel-wise color perturbations by leveraging grayscale knowledge and data augmentation during optimization. Third, we show that ULEOs generated with multi-layer perceptrons (MLPs) are effective against complex convolutional neural network (CNN) classifiers, suggesting that CNNs suffer a specific vulnerability to ULEs. Fourth, we demonstrate that when a classifier is trained on ULEOs, adversarial training prevents a drop in accuracy measured on both clean and adversarial images. Taken together, our contributions represent a substantial advance in the state of the art of unlearnable examples, but they also reveal important characteristics of their behavior that must be better understood in order to achieve further improvements.
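The grayscale pre-filtering mitigation in the first contribution can be realized with a generic torchvision transform such as the sketch below (an assumed pipeline, shown only to make the idea concrete).

```python
from torchvision import transforms

# Convert each training image to grayscale (replicated to 3 channels so that
# standard RGB architectures still accept it), discarding the color-channel
# information that the unlearnable perturbations rely on.
grayscale_prefilter = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.ToTensor(),
])

# e.g. train_set = torchvision.datasets.CIFAR10(root, train=True,
#                                               transform=grayscale_prefilter)
```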
Deep neural networks have been shown to be vulnerable to adversarial images. Conventional attacks strive for imperceptible adversarial images with strictly restricted perturbations. Recently, researchers have moved to explore perceptible yet non-suspicious adversarial images and have demonstrated that color transformation attacks are effective. In this work, we propose Adversarial Color Filter (AdvCF), a novel color transformation attack that is optimized with gradient information in the parameter space of a simple color filter. In particular, our color filter space is explicitly specified, which makes it possible to conduct a systematic robustness analysis of adversarial color transformations from both the attack and defense perspectives. In contrast, existing color transformation attacks do not offer such an opportunity for systematic analysis due to the lack of an explicit space. We further conduct an extensive comparison between different color transformation attacks in terms of both success rate and image acceptability, via a user study. Additional results provide interesting new insights into model robustness against AdvCF on three other visual tasks. We also highlight the human-interpretability of AdvCF, which is promising in practical usage scenarios, and show its superiority over the state-of-the-art human-interpretable color transformation attack in terms of both image acceptability and efficiency.
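To make the attack idea concrete, here is a small hypothetical sketch that optimizes a simple parametric color filter (a per-channel rescaling, chosen for simplicity) by gradient descent on the negative classification loss; the actual AdvCF filter parameterization may differ.

```python
import torch

def color_filter_attack(model, image, label, steps=50, lr=0.05):
    """Optimize per-channel filter parameters (not per-pixel noise)
    to maximize the classifier's loss on the filtered image.
    Expects image of shape [1, 3, H, W] in [0, 1] and label of shape [1]."""
    loss_fn = torch.nn.CrossEntropyLoss()
    theta = torch.zeros(3, requires_grad=True)       # one parameter per RGB channel
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(steps):
        # The filter is a smooth per-channel rescaling, i.e. a global color shift
        # rather than a norm-bounded per-pixel perturbation.
        filtered = torch.clamp(image * (1.0 + theta.view(1, 3, 1, 1)), 0.0, 1.0)
        loss = -loss_fn(model(filtered), label)       # ascend on the true-class loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.clamp(image * (1.0 + theta.detach().view(1, 3, 1, 1)), 0.0, 1.0)
```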
Stock trading strategies play a critical role in investment companies. However, it is challenging to obtain an optimal strategy in the complex and dynamic stock market. We explore the potential of deep reinforcement learning to optimize stock trading strategies and thus maximize investment return. 30 stocks are selected as our trading stocks, and their daily prices are used as the training and trading market environment. We train a deep reinforcement learning agent and obtain an adaptive trading strategy. The agent's performance is evaluated and compared with the Dow Jones Industrial Average and the traditional min-variance portfolio allocation strategy. The proposed deep reinforcement learning approach is shown to outperform both benchmarks in terms of the Sharpe ratio and cumulative returns.
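For reference, a minimal sketch of how the two reported metrics can be computed from a series of daily portfolio returns; the 252-trading-day annualization convention is an assumption.

```python
import numpy as np

def cumulative_return(daily_returns):
    """Total compounded return over the trading period."""
    return np.prod(1.0 + np.asarray(daily_returns)) - 1.0

def sharpe_ratio(daily_returns, risk_free_rate=0.0, periods_per_year=252):
    """Annualized Sharpe ratio of a daily return series."""
    excess = np.asarray(daily_returns) - risk_free_rate / periods_per_year
    return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)

daily = np.random.normal(0.0005, 0.01, size=252)   # toy daily returns
print(cumulative_return(daily), sharpe_ratio(daily))
```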
Learning policies from fixed offline datasets is a key challenge in scaling up reinforcement learning (RL) algorithms to practical applications. This is often because off-policy RL algorithms suffer from distributional shift, due to the mismatch between the dataset and the target policy, leading to high variance and over-estimation of value functions. In this work, we propose variance regularization for offline RL algorithms, using stationary distribution corrections. We show that by using Fenchel duality, we can avoid the double-sampling issue in computing the gradient of the variance regularizer. The proposed algorithm for offline variance regularization (OVAR) can be used to augment any existing offline policy optimization algorithm. We show that the regularizer leads to a lower bound on the offline policy optimization objective, which can help avoid over-estimation errors and explains the benefits of our approach across a range of continuous control domains compared to existing state-of-the-art algorithms.
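The double-sampling issue arises because the variance penalty contains the square of an expectation. A standard Fenchel-dual identity, sketched below in generic form (not the paper's exact objective), introduces an auxiliary variable so that an unbiased stochastic gradient needs only one sample per term:

$$
\operatorname{Var}[X] \;=\; \mathbb{E}[X^2] - \big(\mathbb{E}[X]\big)^2 \;=\; \min_{\nu \in \mathbb{R}} \mathbb{E}\big[(X-\nu)^2\big],
\qquad \text{since } \big(\mathbb{E}[X]\big)^2 = \max_{\nu \in \mathbb{R}} \big\{ 2\nu\,\mathbb{E}[X] - \nu^2 \big\}.
$$

The dual form is linear in $\mathbb{E}[X]$ (with optimizer $\nu = \mathbb{E}[X]$), which is what removes the need for two independent samples when estimating the gradient of the regularizer.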
Motivated by the human-machine interaction such as training chatbots for improving customer satisfaction, we study human-guided human-machine interaction involving private information. We model this interaction as a two-player turn-based game, where one player (Alice, a human) guides the other player (Bob, a machine) towards a common goal. Specifically, we focus on offline reinforcement learning (RL) in this game, where the goal is to find a policy pair for Alice and Bob that maximizes their expected total rewards based on an offline dataset collected a priori. The offline setting presents two challenges: (i) We cannot collect Bob's private information, leading to a confounding bias when using standard RL methods, and (ii) a distributional mismatch between the behavior policy used to collect data and the desired policy we aim to learn. To tackle the confounding bias, we treat Bob's previous action as an instrumental variable for Alice's current decision making so as to adjust for the unmeasured confounding. We develop a novel identification result and use it to propose a new off-policy evaluation (OPE) method for evaluating policy pairs in this two-player turn-based game. To tackle the distributional mismatch, we leverage the idea of pessimism and use our OPE method to develop an off-policy learning algorithm for finding a desirable policy pair for both Alice and Bob. Finally, we prove that under mild assumptions such as partial coverage of the offline data, the policy pair obtained through our method converges to the optimal one at a satisfactory rate.
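For orientation only, the generic (nonparametric) instrumental-variable condition takes the schematic form

$$
\mathbb{E}\big[\,Y - f(A, X)\;\big|\;X, Z\,\big] = 0,
$$

where $Y$ is the observed reward, $A$ the action, $X$ the observed state, $Z$ the instrument (here, Bob's previous action), and $f$ the structural outcome function to be identified. The instrument is assumed to influence the outcome only through the action and to be independent of the unmeasured confounder; the paper's actual identification result is tailored to the two-player turn-based structure and is not reproduced here.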
This paper studies offline policy learning, which aims at utilizing observations collected a priori (from either fixed or adaptively evolving behavior policies) to learn an optimal individualized decision rule that achieves the best overall outcomes for a given population. Existing policy learning methods rely on a uniform overlap assumption, i.e., the propensities of exploring all actions for all individual characteristics are lower bounded in the offline dataset; put differently, the performance of the existing methods depends on the worst-case propensity in the offline dataset. As one has no control over the data collection process, this assumption can be unrealistic in many situations, especially when the behavior policies are allowed to evolve over time with diminishing propensities for certain actions. In this paper, we propose a new algorithm that optimizes lower confidence bounds (LCBs) -- instead of point estimates -- of the policy values. The LCBs are constructed using knowledge of the behavior policies for collecting the offline data. Without assuming any uniform overlap condition, we establish a data-dependent upper bound for the suboptimality of our algorithm, which only depends on (i) the overlap for the optimal policy, and (ii) the complexity of the policy class we optimize over. As an implication, for adaptively collected data, we ensure efficient policy learning as long as the propensities for optimal actions are lower bounded over time, while those for suboptimal ones are allowed to diminish arbitrarily fast. In our theoretical analysis, we develop a new self-normalized type concentration inequality for inverse-propensity-weighting estimators, generalizing the well-known empirical Bernstein's inequality to unbounded and non-i.i.d. data.
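Schematically, pessimistic policy learning of this type maximizes a lower confidence bound of an inverse-propensity-weighted value estimate,

$$
\hat\pi \;=\; \arg\max_{\pi \in \Pi}\Big(\hat V(\pi) - \beta\,\hat U(\pi)\Big),
\qquad
\hat V(\pi) \;=\; \frac{1}{n}\sum_{i=1}^{n}\frac{\mathbf{1}\{A_i = \pi(X_i)\}}{e_i(A_i \mid X_i)}\,Y_i,
$$

where $e_i$ is the (known) behavior-policy propensity, $\hat U(\pi)$ a data-dependent uncertainty width, and $\beta$ the degree of pessimism. This display is a generic template under standard notation, not the exact estimator or bound constructed in the paper.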